Every April, we traditionally bring you a roundup of the most ironic, absurd, and downright paradoxical incidents in information security. In this special edition – a CISO-level data leak, crypto wallets drained after a tax office photo op, an AI-enabled army of spying vacuum cleaners – and more. As always, commentary is provided by Sergio Bertoni, Lead Analyst at SearchInform.


What happened: The Acting Director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA) uploaded internal documents to the public version of ChatGPT.
How it happened: Madhu Gottumukkala, head of CISA, uploaded at least four documents labeled For Official Use Only into ChatGPT last summer. While not classified, the materials were restricted and not intended for public disclosure.
The activity was flagged by automated data loss prevention (DLP) systems designed to detect unauthorized data transfers. The incident was identified in August, prompting the Department of Homeland Security (DHS) to initiate a damage assessment.
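The kind of rule such a DLP system applies at the network edge can be sketched in a few lines: scan outbound text for restriction markings before it leaves the perimeter. The marking list and function name below are illustrative assumptions, not CISA's or any vendor's actual rule set.

```python
import re

# Illustrative list of restriction markings a DLP egress rule might watch
# for; a real deployment would use a vendor-maintained pattern library.
MARKINGS = [r"for official use only", r"\bFOUO\b"]

def flags_marking(text: str) -> bool:
    """Return True if the outbound text carries a known restriction marking."""
    return any(re.search(p, text, re.IGNORECASE) for p in MARKINGS)
```

A rule like this is trivial to evade deliberately, but it reliably catches exactly the careless case described here: restricted documents pasted into a public web service.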
By default, access to OpenAI tools is blocked on CISA-issued devices. Staff are expected to use approved alternatives such as DHSChat – a solution designed to ensure that queries and documents remain within federal networks.
However, shortly after assuming his role, Gottumukkala requested – and received – special access to ChatGPT. Such exceptions are granted temporarily and under strict controls. Despite this, he still uploaded sensitive internal data to a public service – an ironic misstep given both his role and the agency’s mission.
Sergio Bertoni comments: This is a textbook case of “negative selection” – only worse. We’re not dealing with a rogue employee, but with an authorized executive who explicitly requested expanded access.
Senior leadership inherently carries elevated privileges and broader access to sensitive systems – meaning their mistakes have amplified impact. That’s why executives must be placed under enhanced monitoring – with clear visibility into what data they handle and which services they use.
In cybersecurity, a single careless click at the top can negate the work of an entire security team.

What happened: Attackers exploited publicly exposed servers running local AI models.
How it happened: Ollama – a tool for running local LLMs – was found deployed on more than 175,000 internet-accessible systems worldwide. While it defaults to localhost-only access, users manually reconfigured it to listen on all interfaces – without enabling authentication.
As a result, anyone could connect to these servers and use their resources.
This phenomenon, known as LLMjacking, allows attackers to hijack compute power, bandwidth, and electricity for their own purposes – from running resource-intensive models to enabling gray-market AI infrastructure.
Many affected systems use residential IP addresses, making detection and attribution more difficult. Some also run uncensored models, further increasing the risk of abuse.
Importantly, Ollama itself has no inherent vulnerability – the issue stems entirely from insecure configurations.
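The misconfiguration is easy to spot programmatically. A minimal sketch, assuming the OLLAMA_HOST-style convention (default 127.0.0.1:11434, loopback only; anything else accepts outside connections with no authentication):

```python
# Sketch: flag an Ollama-style bind address that listens beyond loopback.
# OLLAMA_HOST defaults to 127.0.0.1:11434; pointing it at 0.0.0.0 (or any
# non-loopback interface) publishes the API with no authentication at all.
LOOPBACK = {"127.0.0.1", "localhost", "::1"}

def is_exposed(ollama_host: str) -> bool:
    """True if the configured bind address accepts non-local connections."""
    host = ollama_host.split("://")[-1]              # drop optional scheme
    addr = host.rsplit(":", 1)[0] if ":" in host else host
    return addr.strip("[]") not in LOOPBACK          # IPv6 hosts need brackets
```

A check like this belongs in deployment scripts or configuration audits, so the one-line change from localhost to 0.0.0.0 never ships unnoticed.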
Sergio Bertoni comments: I have a soft spot for cases where users extend a tool without bothering to understand how it works. Ollama is secure out of the box – localhost only. But someone decided it would be more convenient to open it up – and skip authentication altogether.
The result – users didn’t just expose their own machines, they effectively donated their computing resources to attackers.
We’ve seen this before. Even large-scale services neglect basic controls. Take the Chat & Ask AI app – it stored data in Google Firebase without proper access checks. Anyone could impersonate a user and access internal storage – exposing around 300 million messages from 25 million users.
The takeaway is straightforward – control not just access to public AI services, but also how local tools are deployed. Roll them out centrally, enforce authentication, and monitor usage. Changing localhost to 0.0.0.0 takes seconds – the consequences can last for years.

What happened: A data leak at a fourth-tier contractor nearly exposed internal systems of more than 200 airports worldwide.
How it happened: The leak was identified by supply chain security platform SVigil while monitoring underground forums. Exposed credentials belonged to a system engineer at a fourth-tier contractor – a subcontractor of a subcontractor of the primary IT vendor.
These credentials provided access to a central operational support portal used across more than 200 airports. Two-factor authentication (2FA) was not enabled.
Researchers were able to access infrastructure data in real time.
In the wrong hands, this could have enabled attackers to disrupt terminals during peak hours, interfere with baggage reconciliation systems, or coordinate attacks across multiple hubs – with potential losses in the hundreds of millions of dollars.
Sergio Bertoni comments: On one side – global aviation infrastructure. On the other – a fourth-tier contractor that simply didn’t enable 2FA.
It brings to mind the California “talking traffic lights” case, where default passwords were never changed. The vendor warned about it – no one followed through.
In airports, the stakes are far higher. This isn’t just financial risk – it’s human safety. And all of it could have been prevented by enabling multi-factor authentication.
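Enforcing that second factor is not a heavy lift: the standard TOTP scheme (RFC 6238, the codes generated by common authenticator apps) fits in a dozen lines. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step), digits)
```

Production systems should of course use a vetted library and allow for clock drift, but the point stands: the control missing from a portal spanning 200 airports is this small.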

What happened: South Korean tax authorities accidentally exposed a crypto wallet seed phrase – leading to a $4.8M loss.
How it happened: In a press release about asset seizures, officials published photos that inadvertently revealed a Ledger wallet along with its seed phrase – effectively the master key to the funds.
The following day, 4 million PRTG tokens worth $4.8M were withdrawn from the wallet.
It remains unclear whether attackers exploited the exposed seed phrase or the owner moved the funds. However, the transaction pattern suggests a deliberate approach – a small amount of Ethereum was first deposited to cover fees, followed by three structured withdrawals.
Sergio Bertoni comments: It’s hard to say what’s more remarkable – that no one noticed during filming, during editing, or during publication. Three layers of review – all failed.
Most likely, no one involved understood what a seed phrase is or why it matters. But you don’t need deep technical expertise to follow basic rules – don’t expose sensitive data, and always sanitize materials before publication.
Don’t assume everyone is security-aware – establish clear policies and build awareness across the entire organization.

What happened: Dutch police arrested a man after accidentally giving him access to confidential documents – and he refused to delete them.
How it happened: A resident of Ridderkerk contacted police with potential evidence. An officer attempted to send him an upload link – but mistakenly sent a download link granting access to internal files.
The man simply followed the link and downloaded everything available. When asked to delete the files, he refused and demanded compensation.
Although the police acknowledged their own error, the man was arrested for unauthorized access to a computer system. Authorities stated that the seizure of his devices was necessary to prevent further dissemination of the data.
Sergio Bertoni comments: Not all incidents involve sophisticated attacks – very often, simple human error is enough.
These risks can’t be eliminated entirely, so mitigation is key – deploy DLP solutions, monitor data flows, and enforce controls regardless of intent.
Automation helps neutralize the most unpredictable element in any security system – the human factor.

What happened: An enthusiast gained access to more than 7,000 robot vacuum cleaners worldwide.
How it happened: Programmer Sammy Azzoufal reverse-engineered his DJI vacuum cleaner using an AI coding assistant. When his custom app connected to DJI servers, it unexpectedly interacted with thousands of devices across 24 countries.
He was able to:
The root cause was simple – the MQTT broker used by DJI did not enforce access controls. Once authenticated with a single device token, it was possible to observe traffic from other devices.
Sergio Bertoni comments: There’s an old saying – just because you’re paranoid doesn’t mean you’re not being watched. In this case, you actually are – by your own vacuum cleaner.
Smart devices collect increasing amounts of data. Where is it stored? How is it protected? And who can access it?
Here, access control was either misconfigured or missing entirely. What’s especially telling is that the issue was discovered not by attackers, but by a regular user – using AI tools.
The barrier to entry for both vulnerability discovery and exploitation is getting lower.

What happened: A Spanish man exploited a booking platform vulnerability to stay in hotels for €0.01.
How it happened: He discovered a flaw in the payment confirmation process that allowed bookings to appear fully paid, while only transferring one cent to hotels.
He repeatedly booked rooms worth around €1,000 per night and also consumed minibar contents without paying.
The platform detected discrepancies during financial reconciliation and reported the activity to police.
In February 2026, the suspect was arrested in a Madrid hotel where he had booked a four-night stay using the same method. The damage to that hotel alone is estimated at €20,000.
Sergio Bertoni comments: We’re seeing more cases where systems are compromised not through classic exploits, but through logic abuse.
For example, a chatbot at a UK marketplace was convinced to grant an 80% discount – costing the company thousands.
Security testing should go beyond SQL injections and XSS. You also need to assess how your system can be manipulated through normal usage scenarios.
Even well-intentioned systems can introduce risk. In Spain, a new accident alert system broadcasts data via open API – criminals now use it to arrive at crash sites before emergency services and extort drivers.
Can you eliminate all risks? No. But in cybersecurity, the only option is to keep improving.
Security Tip of the Month: April’s incidents highlight a recurring pattern – the most critical risks often originate from within. Whether it’s excessive privileges, misconfigured AI tools, or human error, insider-driven exposure remains one of the hardest threats to control.
SearchInform Risk Monitor helps detect risky user behavior, privilege abuse, and policy violations in real time, while FileAuditor (DCAP) provides full visibility into access to sensitive data and critical systems – enabling organizations to prevent incidents before they turn into breaches.
Subscribe to our newsletter and receive a bright and useful tutorial Explaining Information Security in 4 steps!
Subscribe to our newsletter and receive case studies in comics!